Fast-Convergent Federated Learning
Authors
Abstract
Federated learning has emerged recently as a promising solution for distributing machine learning tasks through modern networks of mobile devices. Recent studies have obtained lower bounds on the expected decrease in model loss that is achieved in each round of federated learning. However, convergence generally requires a large number of communication rounds, which induces delay in model training and is costly in terms of network resources. In this paper, we propose a fast-convergent federated learning algorithm, called $\mathsf{FOLB}$, which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed. We first theoretically characterize the bound on improvement that can be obtained in each round if devices are selected according to the expected improvement their local models will provide to the current global model. Then, we show that $\mathsf{FOLB}$ obtains this bound through uniform sampling by weighting device updates according to their gradient information. $\mathsf{FOLB}$ is able to handle both communication and computation heterogeneity across devices by adapting its aggregations to estimates of each device's capability of contributing to the updates. We evaluate $\mathsf{FOLB}$ in comparison with existing federated learning algorithms and experimentally show its improvement in trained model accuracy, convergence speed, and/or model stability across various datasets.
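The weighting idea in the abstract can be illustrated with a short sketch. The following is a minimal NumPy illustration, not the paper's exact rule: devices are sampled uniformly, and each sampled update is weighted by how well its local gradient aligns with an estimate of the global gradient. The function name, the `global_grad_est` input, and the max(·, 0) weighting are assumptions made for illustration; the paper derives its precise weights from the convergence bound it proves.

```python
import numpy as np

def folb_style_aggregate(global_model, local_updates, local_grads, global_grad_est, lr=1.0):
    """Aggregate uniformly sampled device updates, weighting each one by the
    alignment of its local gradient with an estimate of the global gradient.
    (Illustrative weighting only; the paper's exact form differs.)"""
    weights = np.array([max(float(g @ global_grad_est), 0.0) for g in local_grads])
    if weights.sum() == 0.0:
        weights = np.ones(len(local_updates))      # fall back to plain averaging
    weights = weights / weights.sum()
    aggregated = sum(w * u for w, u in zip(weights, local_updates))
    return global_model + lr * aggregated

# Toy usage: three devices, a 2-dimensional model
global_model = np.zeros(2)
local_updates = [np.array([0.1, 0.0]), np.array([0.0, 0.2]), np.array([-0.05, 0.05])]
local_grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-0.5, 0.5])]
global_grad_est = np.array([0.6, 0.8])
print(folb_style_aggregate(global_model, local_updates, local_grads, global_grad_est))
```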
Similar Resources
A Fast and Convergent Stochastic Learning Algorithm for MLP
We propose a stochastic learning algorithm for multilayer perceptrons of linear-threshold function units, which theoretically converges with probability one and experimentally (for the three-layer network case) exhibits 100% convergence rate and remarkable speed on parity and simulated problems. On the parity problems (to realize the n-bit parity function by n (minimal) hidden units) the algorit...
Federated Multi-Task Learning
Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theor...
Entity Resolution and Federated Learning get a Federated Resolution
Consider two data providers, each maintaining records of different feature sets about common entities. They aim to learn a linear model over the whole set of features. This problem of federated learning over vertically partitioned data includes a crucial upstream issue: entity resolution, i.e. finding the correspondence between the rows of the datasets. It is well known that entity resolution, ...
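To make the vertically partitioned setting concrete, here is a hypothetical toy sketch: two providers hold different features about overlapping entities, rows are matched on a shared identifier (a stand-in for entity resolution, which in the referenced work must be done without exposing records), and a linear model is then fit over the joined feature set. The provider dictionaries, entity IDs, and the exact-match step are all illustrative assumptions.

```python
import numpy as np

# Hypothetical records from two providers: entity ID -> feature vector
provider_a = {"e1": [0.2, 1.0], "e7": [0.5, 0.1], "e3": [0.9, 0.4]}
provider_b = {"e3": [1.2], "e1": [0.7], "e9": [0.3]}
labels     = {"e1": 1.0, "e3": 0.0}   # labels assumed to sit with provider A

# Step 1: entity resolution -- here a trivial exact match on shared IDs;
# in practice the correspondence is approximate and must be computed privately.
common = sorted(set(provider_a) & set(provider_b) & set(labels))

# Step 2: learn a linear model over the full (vertically concatenated) feature set.
X = np.array([provider_a[e] + provider_b[e] for e in common])
y = np.array([labels[e] for e in common])
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares on the joined rows
print(dict(zip(common, X @ w)))
```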
Federated Meta-Learning for Recommendation
Recommender systems have been widely studied from the machine learning perspective, where it is crucial to share information among users while preserving user privacy. In this work, we present a federated meta-learning framework for recommendation in which user information is shared at the level of algorithm, instead of model or data adopted in previous approaches. In this framework, user-speci...
Certified Convergent Perceptron Learning
Frank Rosenblatt invented the Perceptron algorithm in 1957 as part of an early attempt to build “brain models” – artificial neural networks. In this paper, we apply tools from symbolic logic – dependent type theory as implemented in the interactive theorem prover Coq – to prove that one-layer perceptrons for binary classification converge when trained on linearly separable datasets (the Percept...
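For reference, below is a minimal NumPy sketch of the classic Rosenblatt perceptron rule the snippet refers to: on a linearly separable dataset the mistake-driven loop terminates with a separating hyperplane, which is the property the cited paper certifies in Coq. The function name and toy data are illustrative assumptions; the paper's contribution is the machine-checked proof, not the algorithm itself.

```python
import numpy as np

def perceptron_train(X, y, max_epochs=1000):
    """Rosenblatt's perceptron rule for binary labels in {-1, +1}."""
    X = np.hstack([X, np.ones((len(X), 1))])   # absorb the bias term
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:              # misclassified (or on the boundary)
                w += yi * xi                    # update toward the correct side
                mistakes += 1
        if mistakes == 0:                       # converged: every point classified correctly
            break
    return w

# Toy linearly separable example
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(perceptron_train(X, y))
```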
Journal
Journal title: IEEE Journal on Selected Areas in Communications
Year: 2021
ISSN: 0733-8716, 1558-0008
DOI: https://doi.org/10.1109/jsac.2020.3036952